Results 1 - 17 of 17
1.
JAMA Netw Open ; 4(12): e2141096, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34964851

ABSTRACT

Importance: Most early lung cancers present as pulmonary nodules on imaging, but these can be easily missed on chest radiographs. Objective: To assess if a novel artificial intelligence (AI) algorithm can help detect pulmonary nodules on radiographs at different levels of detection difficulty. Design, Setting, and Participants: This diagnostic study included 100 posteroanterior chest radiograph images taken between 2000 and 2010 of adult patients from an ambulatory health care center in Germany and a lung image database in the US. Included images were selected to represent nodules with different levels of detection difficulty (from easy to difficult) and comprised both normal and abnormal controls. Exposures: All images were processed with a novel AI algorithm, the AI Rad Companion Chest X-ray. Two thoracic radiologists established the ground truth and 9 test radiologists from Germany and the US independently reviewed all images in 2 sessions (unaided and AI-aided mode) with at least a 1-month washout period. Main Outcomes and Measures: Each test radiologist recorded the presence of 5 findings (pulmonary nodules, atelectasis, consolidation, pneumothorax, and pleural effusion) and their level of confidence for detecting the individual finding on a scale of 1 to 10 (1 representing lowest confidence; 10, highest confidence). The analyzed metrics for nodules included sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC). Results: Images from 100 patients were included, with a mean (SD) age of 55 (20) years and including 64 men and 36 women. Mean detection accuracy across the 9 radiologists improved by 6.4% (95% CI, 2.3% to 10.6%) with AI-aided interpretation compared with unaided interpretation. Partial AUCs within the effective interval range of 0 to 0.2 false positive rate improved by 5.6% (95% CI, -1.4% to 12.0%) with AI-aided interpretation.
Junior radiologists saw greater improvement in sensitivity for nodule detection with AI-aided interpretation as compared with their senior counterparts (12%; 95% CI, 4% to 19% vs 9%; 95% CI, 1% to 17%) while senior radiologists experienced similar improvement in specificity (4%; 95% CI, -2% to 9%) as compared with junior radiologists (4%; 95% CI, -3% to 5%). Conclusions and Relevance: In this diagnostic study, an AI algorithm was associated with improved detection of pulmonary nodules on chest radiographs compared with unaided interpretation for different levels of detection difficulty and for readers with different experience.
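The nodule metrics analyzed in this study (sensitivity, specificity, accuracy) reduce to simple confusion-matrix arithmetic; a minimal sketch, with the function name and toy reads assumed for illustration:

```python
def reader_metrics(y_true, y_pred):
    """Per-reader detection metrics from binary calls vs. ground truth."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy
```

The AUC analyses additionally use the 1-to-10 confidence ratings as a sliding decision threshold.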


Subjects
Algorithms; Lung Neoplasms/diagnostic imaging; Adult; Artificial Intelligence; Female; Germany; Humans; Male; Middle Aged; Multiple Pulmonary Nodules/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted; Radiography, Thoracic; Sensitivity and Specificity; Solitary Pulmonary Nodule/diagnostic imaging
2.
Neuroimage ; 219: 117012, 2020 10 01.
Article in English | MEDLINE | ID: mdl-32526386

ABSTRACT

Traditional neuroimage analysis pipelines involve computationally intensive, time-consuming optimization steps, and thus, do not scale well to large cohort studies with thousands or tens of thousands of individuals. In this work we propose a fast and accurate deep learning based neuroimaging pipeline for the automated processing of structural human brain MRI scans, replicating FreeSurfer's anatomical segmentation including surface reconstruction and cortical parcellation. To this end, we introduce an advanced deep learning architecture capable of whole-brain segmentation into 95 classes. The network architecture incorporates local and global competition via competitive dense blocks and competitive skip pathways, as well as multi-slice information aggregation that specifically tailor network performance towards accurate segmentation of both cortical and subcortical structures. Further, we perform fast cortical surface reconstruction and thickness analysis by introducing a spectral spherical embedding and by directly mapping the cortical labels from the image to the surface. This approach provides a full FreeSurfer alternative for volumetric analysis (in under 1 min) and surface-based thickness analysis (within only around 1 h runtime). For sustainability of this approach we perform extensive validation: we assert high segmentation accuracy on several unseen datasets, measure generalizability and demonstrate increased test-retest reliability, and high sensitivity to group differences in dementia.
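The "local and global competition" in the competitive dense blocks can be pictured as a maxout-style merge: instead of concatenating feature maps, only the maximally activated response survives at each position. A minimal NumPy sketch under that assumption (names are illustrative, not the paper's code):

```python
import numpy as np

def competitive_merge(feature_maps):
    """Maxout-style competition: element-wise maximum across a list of
    equally-shaped feature maps, replacing concatenation/addition."""
    return np.maximum.reduce(feature_maps)
```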


Subjects
Brain/diagnostic imaging; Deep Learning; Image Processing, Computer-Assisted/methods; Neuroimaging/methods; Humans; Magnetic Resonance Imaging/methods; Reproducibility of Results; Software
3.
Magn Reson Med ; 83(4): 1471-1483, 2020 04.
Article in English | MEDLINE | ID: mdl-31631409

ABSTRACT

PURPOSE: Introduce and validate a novel, fast, and fully automated deep learning pipeline (FatSegNet) to accurately identify, segment, and quantify visceral and subcutaneous adipose tissue (VAT and SAT) within a consistent, anatomically defined abdominal region on Dixon MRI scans. METHODS: FatSegNet is composed of three stages: (a) Consistent localization of the abdominal region using two 2D-Competitive Dense Fully Convolutional Networks (CDFNet), (b) Segmentation of adipose tissue on three views by independent CDFNets, and (c) View aggregation. FatSegNet is validated by: (1) comparison of segmentation accuracy (sixfold cross-validation), (2) test-retest reliability, (3) generalizability to randomly selected manually re-edited cases, and (4) replication of age and sex effects in the Rhineland Study-a large prospective population cohort. RESULTS: The CDFNet demonstrates increased accuracy and robustness compared to traditional deep learning networks. FatSegNet Dice score outperforms manual raters on VAT (0.850 vs. 0.788) and produces comparable results on SAT (0.975 vs. 0.982). The pipeline has excellent agreement for both test-retest (ICC VAT 0.998 and SAT 0.996) and manual re-editing (ICC VAT 0.999 and SAT 0.999). CONCLUSIONS: FatSegNet generalizes well to different body shapes, sensitively replicates known VAT and SAT volume effects in a large cohort study and permits localized analysis of fat compartments. Furthermore, it can reliably analyze a 3D Dixon MRI in ∼1 minute, providing an efficient and validated pipeline for abdominal adipose tissue analysis in the Rhineland Study.
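The Dice scores quoted above compare binary segmentation masks by overlap; a minimal sketch of the metric (the convention for two empty masks is an assumption):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```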


Assuntos
Aprendizado Profundo , Tecido Adiposo/diagnóstico por imagem , Estudos de Coortes , Imageamento por Ressonância Magnética , Estudos Prospectivos , Reprodutibilidade dos Testes
4.
Neuroimage ; 195: 11-22, 2019 07 15.
Article in English | MEDLINE | ID: mdl-30926511

ABSTRACT

We introduce Bayesian QuickNAT for the automated quality control of whole-brain segmentation on MRI T1 scans. Next to the Bayesian fully convolutional neural network, we also present inherent measures of segmentation uncertainty that allow for quality control per brain structure. For estimating model uncertainty, we follow a Bayesian approach, wherein Monte Carlo (MC) samples from the posterior distribution are generated by keeping the dropout layers active at test time. Entropy over the MC samples provides a voxel-wise model uncertainty map, whereas expectation over the MC predictions provides the final segmentation. Next to voxel-wise uncertainty, we introduce four metrics to quantify structure-wise uncertainty in segmentation for quality control. We report experiments on four out-of-sample datasets comprising diverse age ranges, pathologies, and imaging artifacts. The proposed structure-wise uncertainty metrics are highly correlated with the Dice score estimated with manual annotation and therefore present an inherent measure of segmentation quality. In particular, the intersection over union over all the MC samples is a suitable proxy for the Dice score. In addition to quality control at scan-level, we propose to incorporate the structure-wise uncertainty as a measure of confidence to do reliable group analysis on large data repositories. We envisage that the introduced uncertainty metrics would help assess the fidelity of automated deep learning based segmentation methods for large-scale population studies, as they enable automated quality control and group analyses in processing large data repositories.
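The MC-dropout scheme described here (expectation of the MC predictions for the final segmentation, entropy for the voxel-wise uncertainty map) can be sketched as follows; the array layout and names are illustrative assumptions:

```python
import numpy as np

def mc_dropout_uncertainty(mc_probs):
    """mc_probs: (T, C, ...) class probabilities from T stochastic forward
    passes with dropout kept active at test time. Returns the final
    segmentation (argmax of the MC mean) and a voxel-wise entropy map."""
    mean = mc_probs.mean(axis=0)                      # expectation over MC samples
    seg = mean.argmax(axis=0)                         # final label map
    ent = -(mean * np.log(mean + 1e-12)).sum(axis=0)  # entropy of mean prediction
    return seg, ent
```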


Assuntos
Encéfalo/fisiologia , Aprendizado Profundo , Processamento de Imagem Assistida por Computador/métodos , Imageamento por Ressonância Magnética/métodos , Teorema de Bayes , Humanos , Incerteza
5.
Med Image Anal ; 52: 24-41, 2019 02.
Article in English | MEDLINE | ID: mdl-30468970

ABSTRACT

Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared with that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future.


Assuntos
Extração de Catarata/instrumentação , Aprendizado Profundo , Instrumentos Cirúrgicos , Algoritmos , Humanos , Gravação em Vídeo
6.
Neuroimage ; 186: 713-727, 2019 02 01.
Article in English | MEDLINE | ID: mdl-30502445

ABSTRACT

Whole brain segmentation from structural magnetic resonance imaging (MRI) is a prerequisite for most morphological analyses, but is computationally intense and can therefore delay the availability of image markers after scan acquisition. We introduce QuickNAT, a fully convolutional, densely connected neural network that segments an MRI brain scan in 20 s. To enable training of the complex network with millions of learnable parameters using limited annotated data, we propose to first pre-train on auxiliary labels created from existing segmentation software. Subsequently, the pre-trained model is fine-tuned on manual labels to rectify errors in auxiliary labels. With this learning strategy, we are able to use large neuroimaging repositories without manual annotations for training. In an extensive set of evaluations on eight datasets that cover a wide age range, pathology, and different scanners, we demonstrate that QuickNAT achieves superior segmentation accuracy and reliability in comparison to state-of-the-art methods, while being orders of magnitude faster. The speed-up facilitates processing of large data repositories and supports translation of imaging biomarkers by making them available within seconds for fast clinical decision making.


Assuntos
Encéfalo/anatomia & histologia , Processamento de Imagem Assistida por Computador/métodos , Imageamento por Ressonância Magnética/métodos , Redes Neurais de Computação , Neuroanatomia/métodos , Neuroimagem/métodos , Adulto , Idoso , Idoso de 80 Anos ou mais , Encéfalo/diagnóstico por imagem , Conjuntos de Dados como Assunto , Aprendizado Profundo , Humanos , Pessoa de Meia-Idade , Adulto Jovem
7.
Biomed Opt Express ; 8(8): 3627-3642, 2017 Aug 01.
Article in English | MEDLINE | ID: mdl-28856040

ABSTRACT

Optical coherence tomography (OCT) is used for non-invasive diagnosis of diabetic macular edema, assessing the retinal layers. In this paper, we propose a new fully convolutional deep architecture, termed ReLayNet, for end-to-end segmentation of retinal layers and fluid masses in eye OCT scans. ReLayNet uses a contracting path of convolutional blocks (encoders) to learn a hierarchy of contextual features, followed by an expansive path of convolutional blocks (decoders) for semantic segmentation. ReLayNet is trained to optimize a joint loss function comprising weighted logistic regression and Dice overlap losses. The framework is validated on a publicly available benchmark dataset with comparisons against five state-of-the-art segmentation methods, including two deep learning based approaches, to substantiate its effectiveness.
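The joint loss (a weighted logistic/cross-entropy term plus a Dice overlap term) might look as follows in NumPy; the blending weight `lam` and the per-pixel weighting are illustrative assumptions, not ReLayNet's published hyperparameters:

```python
import numpy as np

def joint_loss(probs, onehot, weights, lam=0.5):
    """Weighted cross-entropy plus (1 - soft Dice).
    probs/onehot: (N, C) arrays; weights: per-pixel/class weighting."""
    ce = -(weights * onehot * np.log(probs + 1e-12)).sum() / onehot.shape[0]
    inter = (probs * onehot).sum()
    dice_loss = 1.0 - 2.0 * inter / (probs.sum() + onehot.sum())
    return ce + lam * dice_loss
```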

8.
Artif Intell Med ; 72: 1-11, 2016 09.
Article in English | MEDLINE | ID: mdl-27664504

ABSTRACT

BACKGROUND: In clinical research, the primary interest is often the time until occurrence of an adverse event, i.e., survival analysis. Its application to electronic health records is challenging for two main reasons: (1) patient records comprise high-dimensional feature vectors, and (2) feature vectors are a mix of categorical and real-valued features, which implies varying statistical properties among features. To learn from high-dimensional data, researchers can choose from a wide range of methods in the fields of feature selection and feature extraction. Whereas feature selection is well studied, little work has focused on utilizing feature extraction techniques for survival analysis. RESULTS: We investigate how well feature extraction methods can deal with features having varying statistical properties. In particular, we consider multiview spectral embedding algorithms, which have been developed specifically for these situations. We propose to use random survival forests to accurately determine local neighborhood relations from right-censored survival data. We evaluated 10 combinations of feature extraction methods and 6 survival models, with and without intrinsic feature selection, in the context of survival analysis on 3 clinical datasets. Our results demonstrate that for small sample sizes - less than 500 patients - models with built-in feature selection (Cox model with ℓ1 penalty, random survival forest, and gradient boosted models) outperform feature extraction methods by a median margin of 6.3% in concordance index (interquartile range: [-1.2%; 14.6%]). CONCLUSIONS: If the number of samples is insufficient, feature extraction methods are unable to reliably identify the underlying manifold, which makes them of limited use in these situations. For large sample sizes - in our experiments, 2500 samples or more - feature extraction methods perform as well as feature selection methods.


Assuntos
Algoritmos , Registros Eletrônicos de Saúde , Análise de Sobrevida , Árvores de Decisões , Humanos , Informática Médica , Máquina de Vetores de Suporte
9.
Med Image Anal ; 34: 13-29, 2016 12.
Article in English | MEDLINE | ID: mdl-27338173

ABSTRACT

In this paper, we propose metric Hashing Forests (mHF), a supervised variant of random forests tailored for the task of nearest neighbor retrieval through hashing. This is achieved by training independent hashing trees that parse and encode the feature space such that local class neighborhoods are preserved and encoded with similar compact binary codes. At the level of each internal node, locality preserving projections are employed to project data to a latent subspace, where separability between dissimilar points is enhanced. Following this, we define an oblique split that maximally preserves this separability and facilitates defining local neighborhoods of similar points. By incorporating the inverse-lookup search scheme within the mHF, we can then effectively mitigate pairwise neuron similarity comparisons, which allows for scalability to massive databases with little additional time overhead. Exhaustive experimental validations on 22,265 neurons curated from over 120 different archives demonstrate the superior efficacy of mHF in terms of its retrieval performance and precision of classification compared with state-of-the-art hashing and metric learning based methods. We conclude that the proposed method can be utilized effectively for similarity-preserving retrieval and categorization in large neuron databases.
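The inverse-lookup search scheme amounts to bucketing database items by their compact binary code, so a query probes one bucket instead of scanning all pairwise similarities; a minimal sketch with assumed names:

```python
from collections import defaultdict

def build_inverse_lookup(codes):
    """Map each compact binary code to the list of item ids sharing it."""
    table = defaultdict(list)
    for item_id, code in enumerate(codes):
        table[code].append(item_id)
    return table

def query(table, code):
    """Single bucket probe instead of pairwise similarity comparisons."""
    return table.get(code, [])
```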


Assuntos
Aprendizado de Máquina , Neurônios/classificação , Arquivos , Bases de Dados Factuais , Humanos , Reprodutibilidade dos Testes , Sensibilidade e Especificidade
10.
Neuroinformatics ; 14(4): 369-85, 2016 10.
Article in English | MEDLINE | ID: mdl-27155864

ABSTRACT

The steadily growing amount of digital neuroscientific data demands a reliable, systematic, and computationally effective retrieval algorithm. In this paper, we present Neuron-Miner, a tool for fast and accurate reference-based retrieval within neuron image databases. The proposed algorithm is built upon a hashing (search and retrieval) technique employing multiple unsupervised random trees, collectively called Hashing Forests (HF). The HF are trained to parse the neuromorphological space hierarchically and preserve the inherent neuron neighborhoods while encoding with compact binary codewords. We further introduce the inverse-coding formulation within HF to effectively mitigate pairwise neuron similarity comparisons, thus allowing scalability to massive databases with little additional time overhead. The proposed hashing tool has superior approximation of the true neuromorphological neighborhood with better retrieval and ranking performance in comparison to existing generalized hashing methods. This is exhaustively validated by quantifying the results over 31,266 neuron reconstructions from the Neuromorpho.org dataset, curated from 147 different archives. We envisage that finding and ranking similar neurons through reference-based querying via Neuron-Miner will assist neuroscientists in objectively understanding the relationship between neuronal structure and function, with applications in comparative anatomy and diagnosis.


Assuntos
Encéfalo/citologia , Mineração de Dados , Processamento de Imagem Assistida por Computador/métodos , Neurônios/citologia , Software , Algoritmos , Animais , Bases de Dados Factuais , Humanos , Aprendizado de Máquina
11.
Med Image Anal ; 32: 1-17, 2016 08.
Article in English | MEDLINE | ID: mdl-27035487

ABSTRACT

In this paper, we propose a supervised domain adaptation (DA) framework for adapting decision forests in the presence of distribution shift between training (source) and testing (target) domains, given few labeled examples. We introduce a novel method for DA through an error-correcting hierarchical transfer relaxation scheme with domain alignment, feature normalization, and leaf posterior reweighting to correct for the distribution shift between the domains. For the first time, we apply DA to the challenging problem of extending in vitro trained forests (source domain) to in vivo applications (target domain). The proof-of-concept is provided for in vivo characterization of atherosclerotic tissues using intravascular ultrasound signals, where the presence of flowing blood is a source of distribution shift between the two domains. This potentially leads to misclassification upon direct deployment of the in vitro trained classifier, thus motivating the need for DA, as obtaining reliable in vivo training labels is often challenging, if not infeasible. Exhaustive validations and parameter sensitivity analysis substantiate the reliability of the proposed DA framework and demonstrate improved tissue characterization performance for scenarios where adaptation is conducted in the presence of only a few examples. The proposed method can thus be leveraged to reduce annotation costs and improve computational efficiency over conventional retraining approaches.
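One ingredient named above, leaf posterior reweighting, can be pictured as blending each leaf's source-domain class posterior with the empirical class distribution of the few labeled target examples reaching that leaf. A hedged sketch (the mixing rule and `alpha` are assumptions, not the paper's exact relaxation scheme):

```python
import numpy as np

def reweight_leaf_posterior(source_post, target_counts, alpha=0.5):
    """Blend the source (in vitro) leaf posterior with the target (in vivo)
    empirical class distribution at that leaf, then renormalize."""
    source_post = np.asarray(source_post, dtype=float)
    target_counts = np.asarray(target_counts, dtype=float)
    target_post = target_counts / target_counts.sum()
    mixed = (1.0 - alpha) * source_post + alpha * target_post
    return mixed / mixed.sum()
```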


Assuntos
Circulação Coronária , Coração/diagnóstico por imagem , Processamento de Imagem Assistida por Computador/métodos , Aprendizado de Máquina Supervisionado , Ultrassonografia/métodos , Humanos , Reprodutibilidade dos Testes , Sensibilidade e Especificidade
12.
Head Neck ; 38(5): 653-69, 2016 May.
Article in English | MEDLINE | ID: mdl-25532458

ABSTRACT

BACKGROUND: Evaluation of molecular pathology markers using a computer-aided quantitative assessment framework would help to assess the altered states of cellular proliferation, hypoxia, and neoangiogenesis in oral submucous fibrosis and could improve diagnostic interpretation in gauging its malignant potentiality. METHODS: Immunohistochemical (IHC) expression of c-Myc, hypoxia-inducible factor-1-alpha (HIF-1α), vascular endothelial growth factor (VEGF), VEGFRII, and CD105 was evaluated in 58 biopsies of oral submucous fibrosis using computer-aided quantification. After digital stain separation of original chromogenic IHC images, quantification of the diaminobenzidine (DAB) reaction pattern was performed based on intensity and extent of cytoplasmic, nuclear, and stromal expression. RESULTS: Assessment of molecular expression suggested that c-Myc and HIF-1α may be used as strong screening markers, VEGF for risk stratification, and VEGFRII and CD105 for prognosis of progression of precancer into oral cancer. CONCLUSION: Our analysis indicated that the proposed method can help in establishing IHC as an effective quantitative immunoassay for molecular pathology and alleviate diagnostic ambiguities in the clinical decision process.


Assuntos
Biomarcadores Tumorais/metabolismo , Diagnóstico por Computador , Fibrose Oral Submucosa/diagnóstico , Fibrose Oral Submucosa/metabolismo , Adulto , Progressão da Doença , Endoglina/metabolismo , Feminino , Humanos , Subunidade alfa do Fator 1 Induzível por Hipóxia/metabolismo , Imuno-Histoquímica/métodos , Masculino , Patologia Molecular , Proteínas Proto-Oncogênicas c-myc/metabolismo , Fator A de Crescimento do Endotélio Vascular/metabolismo , Receptor 2 de Fatores de Crescimento do Endotélio Vascular/metabolismo
13.
F1000Res ; 5: 2676, 2016.
Article in English | MEDLINE | ID: mdl-28713544

ABSTRACT

Ensemble methods have been successfully applied in a wide range of scenarios, including survival analysis. However, most ensemble models for survival analysis consist of models that all optimize the same loss function and do not fully utilize the diversity in available models. We propose heterogeneous survival ensembles that combine several survival models, each optimizing a different loss during training. We evaluated our proposed technique in the context of the Prostate Cancer DREAM Challenge, where the objective was to predict survival of patients with metastatic, castrate-resistant prostate cancer from patient records of four phase III clinical trials. Results demonstrate that a diverse set of survival models was preferred over a single model and that our heterogeneous ensemble of survival models outperformed all competing methods with respect to predicting the exact time of death in the Prostate Cancer DREAM Challenge.

14.
IEEE J Biomed Health Inform ; 20(2): 606-14, 2016 Mar.
Article in English | MEDLINE | ID: mdl-25700476

ABSTRACT

Intravascular imaging using ultrasound or optical coherence tomography (OCT) is predominantly used as an adjunct to clinical information in interventional cardiology. OCT provides high-resolution images for detailed investigation of atherosclerosis-induced thickening of the lumen wall resulting in arterial blockage and triggering acute coronary events. However, the stochastic uncertainty of speckles limits effective visual investigation over large volumes of pullback data, and clinicians are challenged by their inability to investigate subtle variations in the lumen topology associated with plaque vulnerability and onset of necrosis. This paper presents a lumen segmentation method using an OCT imaging physics-based graph representation of signals and a random walks image segmentation approach. The edge weights in the graph are assigned incorporating OCT signal attenuation physics models. The optical backscattering maximum is tracked along each A-scan of the OCT, subsequently refined using global gray-level statistics, and used for initializing seeds for the random walks image segmentation. Accuracy of lumen versus tunica segmentation has been measured on 15 in vitro and 6 in vivo pullbacks, each with 150-200 frames, using 1) Cohen's kappa coefficient (0.9786 ±0.0061) measured with respect to a cardiologist's annotation and 2) divergence of the histograms of the segments computed with Kullback-Leibler (5.17 ±2.39) and Bhattacharya (0.56 ±0.28) measures. High segmentation accuracy and consistency substantiate the ability of this method to reliably segment the lumen across pullbacks in the presence of vulnerability cues and necrotic pools, with deterministic finite time complexity. This paper also illustrates, more generally, the development of methods and frameworks for tissue classification and segmentation that incorporate cues of tissue-energy interaction physics in imaging.
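Random-walker segmentation turns the image into a weighted graph; the generic intensity-based edge weight is sketched below, while the paper additionally folds OCT attenuation physics into these weights (that physics term is omitted here as it is specific to the authors' model):

```python
import math

def edge_weight(g_i, g_j, beta=10.0):
    """Classic random-walker edge weight: neighboring pixels with similar
    (normalized) intensities are strongly connected, dissimilar ones weakly."""
    return math.exp(-beta * (g_i - g_j) ** 2)
```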


Assuntos
Vasos Coronários/diagnóstico por imagem , Processamento de Imagem Assistida por Computador/métodos , Tomografia de Coerência Óptica/métodos , Ultrassonografia de Intervenção/métodos , Humanos , Espalhamento de Radiação
15.
Med Image Comput Comput Assist Interv ; 17(Pt 2): 627-34, 2014.
Article in English | MEDLINE | ID: mdl-25485432

ABSTRACT

In this paper, we introduce a framework for simulating intravascular ultrasound (IVUS) images and radiofrequency (RF) signals from histology image counterparts. We modeled the wave propagation through the Westervelt equation, which is solved explicitly with a finite differences scheme in polar coordinates, taking into account attenuation and non-linear effects. Our results demonstrate good correlation of textural and spectral information derived from simulated IVUS data with real data, acquired with a single-element mechanically rotating 40 MHz transducer, as ground truth.


Assuntos
Interpretação de Imagem Assistida por Computador/instrumentação , Interpretação de Imagem Assistida por Computador/métodos , Microscopia/instrumentação , Microscopia/métodos , Imagens de Fantasmas , Ultrassonografia de Intervenção/instrumentação , Ultrassonografia de Intervenção/métodos , Desenho de Equipamento , Análise de Falha de Equipamento , Ondas de Rádio , Reprodutibilidade dos Testes , Sensibilidade e Especificidade
16.
Article in English | MEDLINE | ID: mdl-25333098

ABSTRACT

Many microscopic imaging modalities suffer from the problem of intensity inhomogeneity due to uneven illumination or camera nonlinearity, known as shading artifacts. A typical example of this is the unwanted seam when stitching images to obtain a whole slide image (WSI). Elimination of shading plays an essential role in subsequent image processing such as segmentation, registration, or tracking. In this paper, we propose two new retrospective shading correction algorithms for WSI targeted to two common forms of WSI: multiple image tiles before mosaicking and an already-stitched image. Both methods leverage recent achievements in matrix rank minimization and sparse signal recovery. We show how the classic shading problem in microscopy can be reformulated as a decomposition problem of low-rank and sparse components, which seeks an optimal separation of the foreground objects of interest and the background illumination field. Additionally, a sparse constraint is introduced in the Fourier domain to ensure the smoothness of the recovered background. Extensive qualitative and quantitative validation on both synthetic and real microscopy images demonstrates superior performance of the proposed methods in shading removal in comparison with a well-established method in ImageJ.
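Low-rank + sparse decompositions of this kind are typically solved by alternating proximal steps; the nuclear-norm step is singular-value thresholding, sketched below as a generic robust-PCA building block (not the authors' full solver):

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: proximal operator of the nuclear norm.
    Shrinks each singular value by tau and drops those that fall below it,
    yielding the low-rank component estimate."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt
```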


Assuntos
Algoritmos , Artefatos , Aumento da Imagem/métodos , Interpretação de Imagem Assistida por Computador/métodos , Microscopia/métodos , Reconhecimento Automatizado de Padrão/métodos , Técnica de Subtração , Reprodutibilidade dos Testes , Sensibilidade e Especificidade
17.
J Pathol Inform ; 4: 35, 2013.
Article in English | MEDLINE | ID: mdl-24524001

ABSTRACT

BACKGROUND: Oral submucous fibrosis (OSF) is a pre-cancerous condition with features of a chronic, inflammatory and progressive sub-epithelial fibrotic disorder of the buccal mucosa. In this study, the malignant potentiality of OSF has been assessed by quantification of immunohistochemical expression of the epithelial prime regulator p63 molecule in correlation to its malignant counterpart (oral squamous cell carcinoma [OSCC]) and normal counterpart (normal oral mucosa [NOM]). Attributes of spatial extent and distribution of p63(+) expression in the epithelium have been investigated. Further, a correlated assessment of histopathological attributes inferred from H&E staining and their mathematical counterparts (molecular pathology of p63) has been proposed. The suggested analytical framework envisages standardization of the immunohistochemistry evaluation procedure for the molecular marker, using computer-aided image analysis, toward enhancing its prognostic value. SUBJECTS AND METHODS: In histopathologically confirmed OSF, OSCC and NOM tissue sections, p63(+) nuclei were localized and segmented by identifying regional maxima in plateau-like intensity spatial profiles of nuclei. Clustered nuclei were localized and segmented by identifying concave points in the morphometry and by marker-controlled watersheds. Voronoi tessellations were constructed around nuclei centroids, and mean values of spatial-relation metrics such as tessellation area, tessellation perimeter, roundness factor and disorder of the area were extracted. Morphology and extent of expression are characterized by area, diameter, perimeter, compactness, eccentricity and density, fraction of p63(+) expression and expression distance of p63(+) nuclei. RESULTS: A correlative framework between histopathological features characterizing malignant potentiality and their quantitative p63 counterparts was developed.
Statistical analyses of the mathematical trends were performed between different biologically relevant combinations: (i) NOM to oral submucous fibrosis without dysplasia (OSFWT), (ii) NOM to oral submucous fibrosis with dysplasia (OSFWD), (iii) OSFWT-OSFWD, and (iv) OSFWD-OSCC. Significant histopathological correlates and their corroborative mathematical features, inferred from p63 staining, were also investigated. CONCLUSION: Quantitative assessment and correlative analysis identified mathematical features related to hyperplasia, cellular stratification, differentiation and maturation, shape and size, nuclear crowding and nucleocytoplasmic ratio. It is envisaged that this approach for analyzing the p63 expression and its distribution pattern may help to establish it as a quantitative bio-marker to predict malignant potentiality and progression. The proposed work would be a value addition to the gold standard by incorporating an observer-independent framework for the associated molecular pathology.
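Of the Voronoi spatial-relation metrics listed (tessellation area, perimeter, roundness factor, disorder of the area), the disorder statistic is often defined from the spread of cell areas; a sketch under that assumed formulation:

```python
import statistics

def tessellation_disorder(cell_areas):
    """'Disorder of the area' of a Voronoi tessellation, assumed here as
    1 - 1/(1 + sd/mean): 0 for perfectly regular cells, approaching 1 as
    cell areas become increasingly irregular (e.g., nuclear crowding)."""
    mean = statistics.mean(cell_areas)
    sd = statistics.pstdev(cell_areas)
    return 1.0 - 1.0 / (1.0 + sd / mean)
```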
